
Self-Supervised Deep Learning on Point Clouds by Reconstructing Space

Neural Information Processing Systems

Point clouds provide a flexible and natural representation usable in countless applications such as robotics or self-driving cars. Recently, deep neural networks operating on raw point cloud data have shown promising results on supervised learning tasks such as object classification and semantic segmentation. While massive point cloud datasets can be captured using modern scanning technology, manually labelling such large 3D point clouds for supervised learning tasks is a cumbersome process. This necessitates methods that can learn from unlabelled data to significantly reduce the number of annotated samples needed in supervised learning. We propose a self-supervised learning task for deep learning on raw point cloud data in which a neural network is trained to reconstruct point clouds whose parts have been randomly rearranged. While solving this task, representations that capture semantic properties of the point cloud are learned. Our method is agnostic of network architecture and outperforms current unsupervised learning approaches in downstream object classification tasks. We show experimentally that pre-training with our method before supervised training improves the performance of state-of-the-art models and significantly improves sample efficiency.
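The pretext task described in the abstract (randomly rearranging parts of a point cloud and training a network to reconstruct the original arrangement) can be illustrated with a minimal data-preparation sketch. This is an assumption-laden illustration, not the authors' implementation: it splits the cloud into a `k × k × k` voxel grid, permutes the voxels' positions, and keeps each point's original voxel index as a self-supervised label.

```python
import numpy as np

def rearrange_parts(points, k=3, rng=None):
    """Illustrative pretext-task input: split an (N, 3) point cloud into a
    k*k*k voxel grid, randomly swap the voxels' positions, and return the
    displaced points plus each point's original voxel index as its label.
    (Hypothetical helper; the paper's exact procedure may differ.)"""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Per-axis voxel indices in 0..k-1 for every point.
    cell = np.clip(((points - lo) / (hi - lo + 1e-9) * k).astype(int), 0, k - 1)
    labels = cell[:, 0] * k * k + cell[:, 1] * k + cell[:, 2]
    # Random permutation of the k**3 voxel slots.
    perm = rng.permutation(k ** 3)
    size = (hi - lo) / k

    def centre(idx):
        # Centre coordinate of each flat voxel index.
        ijk = np.stack([idx // (k * k), (idx // k) % k, idx % k], axis=-1)
        return lo + (ijk + 0.5) * size

    # Translate each point from its voxel centre to the permuted voxel centre.
    moved = points + centre(perm[labels]) - centre(labels)
    return moved, labels
```

A network pre-trained on `(moved, labels)` pairs must reason about the global spatial layout of object parts, which is the intuition behind learning semantic representations from this task.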


Reviews: Self-Supervised Deep Learning on Point Clouds by Reconstructing Space

Neural Information Processing Systems

Originality: This paper is a novel application of an existing method for 2D images [7, 21] to an existing task (point cloud feature learning). Given the success of [21], one would expect it to also work for 3D representations, where spatial layout is equally or more important; this is confirmed by the results in the paper. The citations sufficiently cover related work. Quality: Most of the experimental results appear to be meaningful and support the method's claimed advantages: it is architecture-agnostic, avoids a reconstruction metric, and helps supervised downstream tasks. However, the comparison to alternative methods in Table 1 is weakened by the fact that the model architectures used by the baseline methods are not mentioned.


Reviews: Self-Supervised Deep Learning on Point Clouds by Reconstructing Space

Neural Information Processing Systems

The paper received weak but still positive support from reviewers. The main concern was limited novelty, as the approach transfers work from 2D computer vision to 3D computer vision. However, the simplicity of the approach is a strength, and the reviewers generally agree that it works well.




Self-Supervised Deep Learning on Point Clouds by Reconstructing Space

Sauder, Jonathan, Sievers, Bjarne

Neural Information Processing Systems
